Understanding Neural Network Binarization with Forward and Backward Proximal Quantizers

Yiwei Lu
In neural network binarization, BinaryConnect (BC) and its variants are considered the standard. These methods apply the sign function in their forward pass, and the respective gradients are backpropagated to update the weights. However, the derivative of the sign function is zero wherever it is defined, which consequently freezes training. Therefore, implementations of BC (e.g., BNN) usually replace the derivative of sign in the backward computation with identity or another approximate gradient alternative. Although this practice works well empirically, it is largely a heuristic or "training trick." We aim to shed some light on these training tricks from the optimization perspective. Building on existing theory for ProxConnect (PC, a generalization of BC), we (1) equip PC with different forward-backward quantizers and obtain ProxConnect++ (PC++), which includes existing binarization techniques as special cases; (2) derive a principled way to synthesize forward-backward quantizers with automatic theoretical guarantees; (3) illustrate our theory by proposing an enhanced binarization algorithm, BNN++; and (4) conduct image classification experiments on CNNs and vision transformers, empirically verifying that BNN++ generally achieves competitive results when binarizing these models.
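The forward-backward split described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names are not from the paper): the forward quantizer binarizes weights with sign, while the backward pass replaces sign's derivative (zero almost everywhere) with a clipped identity, the common "straight-through" trick used by BNN, so that gradients still reach the latent real-valued weights.

```python
def sign_forward(ws):
    # Forward quantizer: binarize each latent weight to {-1, +1}
    # (sign(0) mapped to +1, as is conventional).
    return [1.0 if w >= 0.0 else -1.0 for w in ws]

def straight_through_backward(grads, ws, clip=1.0):
    # Backward quantizer: pass the upstream gradient through unchanged
    # where |w| <= clip, and zero it outside that range (the clipped
    # identity used by BNN-style implementations).
    return [g if abs(w) <= clip else 0.0 for g, w in zip(grads, ws)]

# One toy SGD step on latent weights w with a pretend upstream gradient g:
w = [0.6, -0.2, 1.5, -0.9]
w_bin = sign_forward(w)                    # binarized weights for the forward pass
g = [0.1, -0.3, 0.2, 0.4]                  # gradient w.r.t. w_bin (illustrative)
g_w = straight_through_backward(g, w)      # gradient routed to the latent weights
w = [wi - 0.5 * gi for wi, gi in zip(w, g_w)]  # update the real-valued weights
```

Note that the update is applied to the real-valued latent weights, not the binarized ones; only the forward pass sees the {-1, +1} values. PC++ generalizes exactly this pattern by allowing other forward and backward quantizer pairs in place of sign and the clipped identity.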